Comparing the Quality of Crowdsourced Data Contributed by Expert and Non-Experts

Authors

  • Linda See
  • Alexis Comber
  • Carl Salk
  • Steffen Fritz
  • Marijn van der Velde
  • Christoph Perger
  • Christian Schill
  • Ian McCallum
  • Florian Kraxner
  • Michael Obersteiner
Abstract

There is currently a lack of in-situ environmental data for the calibration and validation of remotely sensed products and for the development and verification of models. Crowdsourcing is increasingly seen as one potentially powerful way of increasing the supply of in-situ data, but there are a number of concerns over the subsequent use of the data, in particular over data quality. This paper examined crowdsourced data from the Geo-Wiki crowdsourcing tool for land cover validation to determine whether there were significant differences in quality between the answers provided by experts and non-experts in the domain of remote sensing, and therefore the extent to which crowdsourced data describing human impact and land cover can be used in further scientific research. The results showed that there was little difference between experts and non-experts in identifying human impact, although results varied by land cover, while experts were better than non-experts in identifying the land cover type. This suggests the need to create training materials with more examples in those areas where difficulties in identification were encountered, and to offer some method for contributors to reflect on the information they contribute, perhaps by feeding back the evaluations of their contributed data or by making additional training materials available. Accuracies were also found to be higher when the volunteers were more consistent in their responses at a given location and when they indicated higher confidence, which suggests that these additional pieces of information could be used in the development of robust measures of quality in the future.
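The comparison described in the abstract amounts to measuring how often each contributor group agrees with control locations whose reference label is known. The following is a minimal, hypothetical sketch of that kind of per-group accuracy calculation; the data, labels, and function names are illustrative only and are not taken from the Geo-Wiki dataset or the authors' analysis.

# Minimal illustrative sketch (not the authors' code or data): compare the
# accuracy of expert and non-expert contributions against control points
# whose reference land cover label is known. All values below are hypothetical.

records = [
    # (group, reference_label, contributed_label)
    ("expert",     "forest",    "forest"),
    ("expert",     "cropland",  "cropland"),
    ("expert",     "grassland", "forest"),
    ("non-expert", "forest",    "forest"),
    ("non-expert", "cropland",  "grassland"),
    ("non-expert", "grassland", "grassland"),
]

def accuracy_by_group(rows):
    """Fraction of contributions matching the reference label, per group."""
    totals, correct = {}, {}
    for group, reference, answer in rows:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + int(answer == reference)
    return {group: correct[group] / totals[group] for group in totals}

print(accuracy_by_group(records))
# -> {'expert': 0.666..., 'non-expert': 0.666...}: similar accuracy in this toy sample

A study of the kind summarised above would additionally break such figures down by land cover class and by each contributor's self-reported confidence and within-location consistency, which is where the differences reported in the abstract appear.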


Similar Articles

Multiple Clustering Views from Multiple Uncertain Experts

Expert input can improve clustering performance. In today’s collaborative environment, the availability of crowdsourced multiple expert input is becoming common. Given multiple experts’ inputs, most existing approaches can only discover one clustering structure. However, data is multi-faceted by nature and can be clustered in different ways (also known as views). In an exploratory analysis prob...


I Want to Believe: Journalists and Crowdsourced Accuracy Assessments in Twitter

Evaluating information accuracy in social media is an increasingly important and well-studied area, but limited research has compared journalist-sourced accuracy assessments to their crowdsourced counterparts. This paper demonstrates the differences between these two populations by comparing the features used to predict accuracy assessments in two Twitter data sets: CREDBANK and PHEME. While our ...


Crowdsourcing: It Matters Who the Crowd Are. The Impacts of between Group Variations in Recording Land Cover

Volunteered geographical information (VGI) and citizen science have become important sources of data for much scientific research. In the domain of land cover, crowdsourcing can provide high temporal resolution data to support different analyses of landscape processes. However, scientists may have little control over what gets recorded by the crowd, providing a potential source of error and ...


Comparison of Experts and Rangers’ Opinions on Prioritizing Barriers in Participation of Rangers in Range Plans (Case Study: Tehran Province- Lar Moor)

There are some barriers for rangers to take part in range plan projects and evaluation. Their participation is very useful for range managers to plan and provide solutions for solving problems. This study aims at comparing barriers to participation from the rangers' and experts' points of view in the Lar moor rangelands, Tehran, Iran. In this research, data were collected based on documentation-library and fi...


Experiments with crowdsourced re-annotation of a POS tagging data set

Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have largely assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually ann...



Journal:

Volume 8, Issue 

Pages  -

Publication date: 2013